CSE408
Lower bound theory
Lecture # 31
Lower bound
We usually consider the algorithm with the smaller order of complexity to be the better one. But how can we be sure that no faster method exists?
We need to find a function g(n) which is a lower bound on the time the algorithm must take. So if f(n) is the time for some algorithm, then we can write f(n) = Ω(g(n)), where g(n) is a lower bound for f(n).
Equivalently, f(n) >= c * g(n) for all n > n0, where c and n0 are positive constants.
For many problems it is easy to calculate a lower bound in terms of the n inputs.
e.g. consider the problem of finding the maximum of a set of n integers. Clearly every integer must be examined at least once, so Ω(n) is a lower bound for it. For matrix multiplication there are 2n² inputs, so the lower bound is Ω(n²).
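The Ω(n) bound for maximum-finding is tight: the standard scan below uses exactly n−1 comparisons (a minimal sketch; the function name is illustrative):

```c
/* Find the maximum of n integers with exactly n-1 comparisons. */
int max_of(const int a[], int n) {
    int best = a[0];
    for (int i = 1; i < n; i++)   /* n-1 iterations, one comparison each */
        if (a[i] > best)
            best = a[i];
    return best;
}
```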
Sorting and Searching
For sorting and searching we use comparison trees to find the lower bound.
For searching an unordered set, Ω(n) is the lower bound. For an ordered set it is Ω(log n). Similarly, no comparison-based sorting algorithm can sort in fewer than cn log n comparisons in the worst case, so Ω(n log n) is the lower bound for comparison-based sorting.
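Binary search attains the Ω(log n) bound for an ordered set. A sketch that also counts its three-way comparisons (the counter parameter is added here purely for illustration):

```c
/* Binary search over a sorted array; *cmps counts three-way comparisons. */
int binary_search(const int a[], int n, int key, int *cmps) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        (*cmps)++;                        /* one three-way comparison */
        if (a[mid] == key) return mid;
        if (a[mid] < key) lo = mid + 1;
        else              hi = mid - 1;
    }
    return -1;   /* not found */
}
```

For n = 8 the loop runs at most ⌈log₂(n+1)⌉ = 4 times, matching the comparison-tree bound.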
Oracle and adversary arguments
Given some computational model, an oracle tells the outcome of each comparison. In order to derive a good lower bound, the oracle tries its best to make the algorithm work as hard as possible.
Take the example of a tournament where the values are called players, the largest value is interpreted as the winner, and the second largest is taken as the runner-up. The problem is then similar to finding the largest and the second largest number out of a set of n numbers.
Second largest
Since we can't determine the second largest element without having determined the largest element, at least n-1 comparisons are necessary. We need to show that there is always some sequence of comparisons which forces the second largest to be found only after ⌈log₂ n⌉ - 1 additional comparisons.
Tournament theory
If the winner of the tournament has played x matches, then there are x people who are candidates for the runner-up position: the runner-up has lost only once, to the winner, and the other x-1 candidates must each have lost to one other person. We can produce an oracle which decides the results of matches in such a way that the winner plays ⌈log₂ n⌉ other people.
In a match between a and b, the oracle declares a the winner if a is previously undefeated and b has lost at least once, or if both a and b are undefeated but a has won more matches than b. In any other case the oracle can decide arbitrarily, as long as it remains consistent.
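The oracle's decision rule can be sketched as a small function. The Player struct and its field names are illustrative assumptions, not from the slides:

```c
#include <stdbool.h>

/* State the adversary tracks for each player (illustrative names). */
typedef struct {
    bool lost;   /* has this player lost at least once? */
    int  wins;   /* number of matches won so far */
} Player;

/* Returns true if the oracle declares a the winner of a vs b. */
bool oracle_a_wins(const Player *a, const Player *b) {
    if (!a->lost && b->lost) return true;    /* a undefeated, b has lost */
    if (a->lost && !b->lost) return false;   /* symmetric case */
    if (!a->lost && !b->lost)
        return a->wins >= b->wins;  /* both undefeated: more wins prevails */
    return true;   /* both have lost: any consistent choice is allowed */
}
```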
Winning match
Corresponding to this tournament, imagine drawing a directed graph with n vertices. Each vertex corresponds to one of the n players. Draw a directed edge from vertex b to a iff either player a has defeated b, or a has defeated another player who has defeated b. Since for the overall winner there must be an edge from each of the remaining n-1 vertices, it follows that the winner must have played at least ⌈log₂ n⌉ matches.
Increasing matrix
M is an n×n integer matrix in which the keys in each row are in increasing order. Consider the problem of finding the position of an integer x in M. Determine a lower bound on the number of comparisons needed for this problem.
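Assuming only the rows are sorted, one natural strategy is a binary search in each row, about n⌈log₂ n⌉ comparisons in the worst case; comparing this against the lower bound is the point of the exercise. A sketch with an illustrative fixed size N:

```c
#include <stdbool.h>

#define N 4   /* illustrative matrix size */

/* Search x in an N x N matrix whose rows are each in increasing order. */
bool find_in_matrix(int M[N][N], int x, int *row, int *col) {
    for (int r = 0; r < N; r++) {        /* binary-search each row */
        int lo = 0, hi = N - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (M[r][mid] == x) { *row = r; *col = mid; return true; }
            if (M[r][mid] < x) lo = mid + 1;
            else               hi = mid - 1;
        }
    }
    return false;
}
```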
CSE408
Back tracking Algorithm
Lecture # 32
Backtracking
Backtracking is a technique used to solve problems with a large search space, by systematically trying and eliminating possibilities.
A standard example of backtracking would be going through a maze. At some point in a maze, you might have two options of which direction to go.
One strategy would be to try going through Portion A of the maze. If you get stuck before you find your way out, then you "backtrack" to the junction. At this point in time you know that Portion A will NOT lead you out of the maze, so you then start searching in Portion B.
Clearly, at a single junction you could have even more than 2 choices. The backtracking strategy says to try each choice, one after the other; if you ever get stuck, "backtrack" to the junction and try the next choice. If you try all choices and never find a way out, then there IS no solution to the maze.
Backtracking Eight Queens Problem
Find an arrangement of 8 queens on a single chess board such that no two queens are attacking one another. In chess, queens can move all the way down any row, column or diagonal (so long as no pieces are in the way). Due to the row and column restrictions, it's clear that each row and column of the board will have exactly one queen.
The backtracking strategy is as follows:
1) Place a queen on the first available square in row 1.
2) Move on to the next row, placing a queen on the first available square there (that doesn't conflict with the previously placed queens).
3) Continue in this fashion until either:
   a) you have solved the problem, or
   b) you get stuck.
When you get stuck, remove the queens that got you there, until you get to a row where there is another valid square to try.
Animated example: http://www.hbmeyer.de/backtrack/achtdamen/eight.htm#up
Backtracking Eight Queens Problem
When we carry out backtracking, an easy way to visualize what is going on is a tree that shows all the different possibilities that have been tried. On the board we will show a visual representation of solving the 4 Queens problem (placing 4 queens on a 4x4 board where no two attack one another).
The neat thing about coding up backtracking is that it can be done recursively, without having to do all the bookkeeping at once. Instead, the stack of recursive calls does most of the bookkeeping (i.e., keeping track of which queens we've placed and which combinations we've tried so far).
void solveItRec(int perm[], int location, bool usedList[]) {
    if (location == SIZE) {
        printSol(perm);          /* found a solution, so print it */
        return;
    }
    for (int i = 0; i < SIZE; i++) {            /* try each row for this column */
        if (usedList[i] == false) {             /* only rows not already used */
            if (!conflict(perm, location, i)) { /* no diagonal conflict with
                                                   previously placed queens */
                perm[location] = i;             /* 1) place the queen in this row */
                usedList[i] = true;             /* 2) mark the row as used */
                solveItRec(perm, location + 1, usedList); /* 3) solve the next
                                                   column recursively */
                usedList[i] = false;            /* 4) un-mark the row, so we
                                                   get ALL valid solutions */
            }
        }
    }
}

Here perm[] stores a valid permutation of queens from index 0 to location-1, location is the column in which we are placing the next queen, and usedList[] keeps track of the rows in which queens have already been placed.
Backtracking 8 queens problem - Analysis
Another possible brute-force algorithm is to generate the permutations of the numbers 1 through 8 (of which there are 8! = 40,320) and use the elements of each permutation as indices to place a queen on each row, then reject those boards with diagonal attacking positions.
The backtracking algorithm, a slight improvement on the permutation method, constructs the search tree by considering one row of the board at a time, eliminating most non-solution board positions at a very early stage in their construction. Because it rejects row and diagonal attacks even on incomplete boards, it examines only 15,720 possible queen placements.
A further improvement, which examines only 5,508 possible queen placements, is to combine the permutation-based method with the early pruning method: the permutations are generated depth-first, and the search space is pruned if the partial permutation produces a diagonal attack.
CSE408
Subset Sum, Assignment Problem
Lecture # 34
Subset Sum Problem
Branch and Bound
Assignment Problem
Example
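Since the slides leave this section as headings only, here is a minimal backtracking sketch of subset sum; the function name and interface are assumptions, not from the lecture. For each item it tries the two branches "include" and "exclude", and backtracks when the remaining target goes negative:

```c
#include <stdbool.h>

/* Is there a subset of w[i..n-1] summing exactly to `remaining`?
   Assumes non-negative weights, so remaining < 0 can be pruned. */
bool subset_sum(const int w[], int n, int i, int remaining) {
    if (remaining == 0) return true;           /* exact sum reached */
    if (i == n || remaining < 0) return false; /* dead end: backtrack */
    return subset_sum(w, n, i + 1, remaining - w[i])  /* include w[i] */
        || subset_sum(w, n, i + 1, remaining);        /* exclude w[i] */
}
```

Branch and bound strengthens this pruning with a bound function, e.g. also cutting a branch when the sum of all remaining weights cannot reach the target.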
CSE408
Knapsack problem
Lecture # 35
Dynamic Programming
Dynamic Programming is a general algorithm design technique for solving problems defined by or formulated as recurrences with overlapping subinstances.
It was invented by American mathematician Richard Bellman in the 1950s to solve optimization problems, and was later assimilated by CS. "Programming" here means "planning".
Main idea:
- set up a recurrence relating a solution to a larger instance to solutions of some smaller instances
- solve each smaller instance once
- record solutions in a table
- extract the solution to the initial instance from that table
Knapsack Problem by DP
Given n items of
  integer weights: w1, w2, ..., wn
  values:          v1, v2, ..., vn
and a knapsack of integer capacity W, find the most valuable subset of the items that fits into the knapsack.
Consider the instance defined by the first i items and capacity j (j <= W). Let V[i,j] be the optimal value of such an instance. Then

  V[i,j] = max{ V[i-1,j], vi + V[i-1,j-wi] }   if j - wi >= 0
  V[i,j] = V[i-1,j]                            if j - wi < 0

Initial conditions: V[0,j] = 0 and V[i,0] = 0.
Knapsack Problem by DP (example)
Example: knapsack of capacity W = 5

  item  weight  value
    1      2     $12
    2      1     $10
    3      3     $20
    4      2     $15

                          capacity j
                       0   1   2   3   4   5
  i=0                  0   0   0   0   0   0
  i=1 (w1=2, v1=12)    0   0  12  12  12  12
  i=2 (w2=1, v2=10)    0  10  12  22  22  22
  i=3 (w3=3, v3=20)    0  10  12  22  30  32
  i=4 (w4=2, v4=15)    0  10  15  25  30  37

Backtracing through the table finds the actual optimal subset, i.e. the solution.
Knapsack Problem by DP (pseudocode)
Algorithm DPKnapsack(w[1..n], v[1..n], W)
  var V[0..n,0..W], P[1..n,1..W]: int
  for j := 0 to W do
    V[0,j] := 0
  for i := 0 to n do
    V[i,0] := 0
  for i := 1 to n do
    for j := 1 to W do
      if w[i] <= j and v[i] + V[i-1,j-w[i]] > V[i-1,j] then
        V[i,j] := v[i] + V[i-1,j-w[i]]; P[i,j] := j-w[i]
      else
        V[i,j] := V[i-1,j]; P[i,j] := j
  return V[n,W] and the optimal subset by backtracing

Running time and space: O(nW).
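The pseudocode can be turned into runnable C; the fixed sizes and the function name below are illustrative assumptions. The table V is filled exactly as in the recurrence, and V[N][W] holds the optimal value:

```c
#define N 4   /* number of items (illustrative) */
#define W 5   /* knapsack capacity (illustrative) */

/* Bottom-up 0/1 knapsack: fills V per the recurrence, returns V[N][W]. */
int dp_knapsack(const int w[N], const int v[N], int V[N + 1][W + 1]) {
    for (int j = 0; j <= W; j++) V[0][j] = 0;    /* no items: value 0 */
    for (int i = 1; i <= N; i++) {
        V[i][0] = 0;                             /* capacity 0: value 0 */
        for (int j = 1; j <= W; j++) {
            V[i][j] = V[i - 1][j];               /* skip item i */
            if (w[i - 1] <= j && v[i - 1] + V[i - 1][j - w[i - 1]] > V[i][j])
                V[i][j] = v[i - 1] + V[i - 1][j - w[i - 1]];  /* take it */
        }
    }
    return V[N][W];
}
```

On the example instance (weights 2, 1, 3, 2, values 12, 10, 20, 15, W = 5) this returns 37, matching the table.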
CSE408
TSP
Lecture # 36
Traveling Salesman Problem
For each city i, 1 ≤ i ≤ n, find the sum si of the distances from city i to the two nearest cities; compute the sum s of these n numbers, divide the result by 2, and, if all the distances are integers, round the result up to the nearest integer: lb = ⌈s/2⌉.
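That bounding rule can be sketched as follows; the fixed city count and the distance matrix used in the test are illustrative assumptions:

```c
#define NC 4   /* number of cities (illustrative) */

/* lb = ceil( (sum over i of the two smallest distances from i) / 2 ) */
int tsp_lower_bound(int d[NC][NC]) {
    int total = 0;
    for (int i = 0; i < NC; i++) {
        int m1 = 1 << 30, m2 = 1 << 30;   /* two nearest distances from i */
        for (int j = 0; j < NC; j++) {
            if (j == i) continue;
            if (d[i][j] < m1) { m2 = m1; m1 = d[i][j]; }
            else if (d[i][j] < m2) m2 = d[i][j];
        }
        total += m1 + m2;
    }
    return (total + 1) / 2;   /* integer round-up of total/2 */
}
```

This is the standard branch-and-bound lower bound: every tour uses exactly two edges at each city, so half the sum of the two cheapest incident edges can never exceed the optimal tour length.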